79 research outputs found

    Response Coordination Emerges in Cooperative but Not Competitive Joint Task

    Effective social interactions rely on humans’ ability to attune to others within social contexts. Recently, it has been proposed that the emergence of shared representations, as indexed by the Joint Simon effect (JSE), might result from interpersonal coordination (Malone et al., 2014). The present study aimed to examine interpersonal coordination in cooperative and competitive joint tasks. To this end, in two experiments we investigated response coordination, as reflected in instantaneous cross-correlation, when co-agents cooperate (Experiment 1) or compete against each other (Experiment 2). In both experiments, participants performed a go/no-go Simon task alone and together with another agent in two consecutive sessions. In line with previous studies, we found that social presence differently affected the JSE under cooperative and competitive instructions. Similarly, cooperation and competition were reflected in the co-agents’ response coordination. For the cooperative session (Experiment 1), results showed a higher percentage of interpersonal coordination in the joint condition, relative to when participants performed the task alone. No difference in response coordination occurred between the individual and the joint conditions when co-agents were in competition (Experiment 2). Finally, results showed that interpersonal coordination between co-agents implies the emergence of the JSE. Taken together, our results suggest that shared representations are a necessary, but not sufficient, condition for interpersonal coordination.
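The "instantaneous cross-correlation" measure of response coordination can be illustrated with a minimal sketch: a sliding-window, zero-lag correlation between two co-agents' response-time series, reporting the percentage of windows whose correlation exceeds a threshold. The function name, window size, and threshold below are illustrative assumptions, not parameters taken from the study.

```python
import numpy as np

def coordination_percentage(rt_a, rt_b, window=10, threshold=0.3):
    """Sliding-window ('instantaneous') zero-lag cross-correlation between
    two response-time series; returns the percentage of windows whose
    correlation exceeds the threshold. Illustrative parameters only."""
    rt_a, rt_b = np.asarray(rt_a, float), np.asarray(rt_b, float)
    n = min(len(rt_a), len(rt_b))
    coordinated = 0
    windows = 0
    for start in range(0, n - window + 1):
        a = rt_a[start:start + window]
        b = rt_b[start:start + window]
        if a.std() == 0 or b.std() == 0:  # correlation undefined
            continue
        r = np.corrcoef(a, b)[0, 1]
        windows += 1
        if r > threshold:
            coordinated += 1
    return 100.0 * coordinated / windows if windows else 0.0
```

Two perfectly co-varying series yield 100%, while anti-correlated series yield 0%, so higher values indicate tighter response coordination between the co-agents.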

    Action intentions modulate allocation of visual attention: electrophysiological evidence

    In line with the Theory of Event Coding (Hommel et al., 2001), action planning has been shown to affect perceptual processing, an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Hommel, 2010). This paper investigates the electrophysiological correlates of action-related modulations of selection mechanisms in visual perception. A paradigm combining a visual search task for size and luminance targets with a movement task (grasping or pointing) was introduced, and the EEG was recorded while participants performed the tasks. The results showed that the behavioral congruency effects, i.e., better performance in congruent relative to incongruent action-perception trials, were reflected in a modulation of the P1 component as well as the N2pc (an ERP marker of spatial attention). These results support the argument that action planning modulates early perceptual processing and attention mechanisms.

    Sources and time course of mechanisms biasing visual selection


    Imaging when acting: picture but not word cues induce action-related biases of visual attention

    In line with the Theory of Event Coding (Hommel et al., 2001a), action planning has been shown to affect perceptual processing, an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Memelink and Hommel, 2012), whose functional role is to provide information for open parameters of online action adjustment (Hommel, 2010). The aim of this study was to test whether different types of action representations induce intentional weighting to various degrees. To meet this aim, we introduced a paradigm in which participants performed a visual search task while preparing to grasp or to point. The to-be-performed movement was signaled either by a picture of the required action or by a word cue. We reasoned that picture cues might trigger a more concrete action representation that would be more likely to activate the intentional weighting of perceptual dimensions that provide information for online action control. In contrast, word cues were expected to trigger a more abstract action representation that would be less likely to induce intentional weighting. In two experiments, preparing for an action facilitated the processing of targets in an unrelated search task if they differed from distractors on a dimension that provided information for online action control. As predicted, however, this effect was observed only if action preparation was signaled by picture cues, not if it was signaled by word cues. We conclude that picture cues are more efficient than word cues in activating the intentional weighting of perceptual dimensions, presumably by specifying not only invariant characteristics of the planned action but also the dimensions of action-specific parameters.

    Social inclusion of robots depends on the way a robot is presented to observers

    Research has shown that people evaluate others according to specific categories. As this phenomenon seems to transfer from human–human to human–robot interactions, in the present study we focused on (1) the degree of prior knowledge about technology, in terms of theoretical background and technical education, and (2) intentionality attribution toward robots, as factors potentially modulating individuals' tendency to perceive robots as social partners. Thus, we designed a study in which we asked two samples of participants, varying in their prior knowledge about technology, to perform a ball-tossing game before and after watching a video in which the humanoid iCub robot was depicted either as an artificial system or as an intentional agent. Results showed that people were more prone to socially include the robot after observing iCub presented as an artificial system, regardless of their degree of prior knowledge about technology. Therefore, we suggest that the way the robot is presented, and not prior knowledge about technology, is likely to modulate individuals' tendency to perceive the robot as a social partner.

    ERP markers of action planning and outcome monitoring in human–robot interaction

    The present study aimed to examine event-related potentials (ERPs) of action planning and outcome monitoring in human-robot interaction. To this end, participants were instructed to perform costly actions (i.e., losing points) to stop a balloon from inflating and to prevent its explosion. They performed the task alone (individual condition) or with a robot (joint condition). Similar to findings from human-human interactions, results showed that action planning was affected by the presence of another agent, in this case a robot. Specifically, the early readiness potential (eRP) amplitude was larger in the joint than in the individual condition. The presence of the robot also affected outcome perception and monitoring. Our results showed that the P1/N1 complex was suppressed in the joint, compared to the individual, condition when the worst outcome was expected, suggesting that the presence of the robot affects attention allocation to negative outcomes of one's own actions. Similarly, results showed that larger losses elicited a smaller feedback-related negativity (FRN) in the joint than in the individual condition. Taken together, our results indicate that the social presence of a robot may influence the way we plan our actions and the way we monitor their consequences. Implications of the study for the human-robot interaction field are discussed.
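ERP component amplitudes such as the eRP or FRN are commonly quantified as the mean of the trial-averaged waveform within a component-specific time window. The following is a minimal sketch of that generic analysis step, assuming a single-electrode epoch matrix; the function name and layout are illustrative, not the study's actual pipeline.

```python
import numpy as np

def mean_amplitude(epochs, times, t_min, t_max):
    """Average over trials, then return the mean amplitude of the
    resulting ERP within [t_min, t_max].
    epochs: (n_trials, n_samples) EEG at one electrode, in microvolts;
    times:  (n_samples,) sample times in seconds. Illustrative sketch."""
    erp = np.mean(epochs, axis=0)             # trial-averaged waveform
    mask = (times >= t_min) & (times <= t_max)  # component time window
    return float(erp[mask].mean())
```

A condition effect such as the reported eRP difference would then correspond to comparing this value between joint- and individual-condition epochs in the component's window.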

    I see what you mean

    The ability to understand and predict others' behavior is essential for successful interactions. When making predictions about what other humans will do, we treat them as intentional systems and adopt the intentional stance, i.e., we refer to their mental states, such as desires and intentions. In the present experiments, we investigated whether the mere belief that the observed agent is an intentional system influences basic social attention mechanisms. We presented pictures of a human and a robot face in a gaze-cuing paradigm and manipulated the likelihood of adopting the intentional stance by instruction: in some conditions, participants were told that they were observing a human or a robot; in others, that they were observing a human-like mannequin or a robot whose eyes were controlled by a human. In conditions in which participants were made to believe they were observing human behavior (intentional stance likely), gaze-cuing effects were significantly larger compared to conditions in which adopting the intentional stance was less likely. This effect was independent of whether a human or a robot face was presented. We therefore conclude that adopting the intentional stance when observing others' behavior fundamentally influences basic mechanisms of social attention. The present results provide striking evidence that high-level cognitive processes, such as beliefs, modulate bottom-up mechanisms of attentional selection in a top-down manner.

    From social brains to social robots: applying neurocognitive insights to human-robot interaction

    Amidst the fourth industrial revolution, social robots are resolutely moving from fiction to reality. With sophisticated artificial agents becoming ever more ubiquitous in daily life, researchers across different fields are grappling with questions concerning how humans perceive and interact with these agents and the extent to which the human brain incorporates intelligent machines into our social milieu. This theme issue surveys and discusses the latest findings, current challenges, and future directions in neuroscience- and psychology-inspired human–robot interaction (HRI). Critical questions are explored from a transdisciplinary perspective centred around four core topics in HRI: technical solutions for HRI, development and learning for HRI, robots as a tool to study social cognition, and the moral and ethical implications of HRI. Integrating findings from diverse but complementary research fields, including the social and cognitive neurosciences, psychology, artificial intelligence, and robotics, the contributions showcase ways in which research from disciplines spanning the biological sciences, social sciences, and technology deepens our understanding of the potential and limits of robotic agents in human social life.

    Intentional Mindset Toward Robots—Open Questions and Methodological Challenges

    Natural and effective interaction with humanoid robots should involve the social cognitive mechanisms of the human brain that normally facilitate social interaction between humans. Recent research has indicated that the presence and efficiency of these mechanisms in human-robot interaction (HRI) might be contingent on the adoption of a set of attitudes, mindsets, and beliefs concerning the robot's inner machinery. Current research is investigating the factors that influence these mindsets and how they affect HRI. This review focuses on a specific mindset, namely the "intentional mindset," in which intentionality is attributed to another agent. More specifically, we focus on the concept of adopting the intentional stance toward robots, i.e., the tendency to predict and explain the robots' behavior with reference to mental states. We discuss the relationship between the adoption of the intentional stance and lower-level mechanisms of social cognition, and we provide a critical evaluation of the research methods currently employed in this field, highlighting common pitfalls in the measurement of attitudes and mindsets.